    Augmented L1 and Nuclear-Norm Models with a Globally Linearly Convergent Algorithm

    This paper studies the long-standing idea of adding a nice smooth function to "smooth" a non-differentiable objective function in the context of sparse optimization, in particular the minimization of ||x||_1 + 1/(2α)||x||_2^2, where x is a vector, as well as the minimization of ||X||_* + 1/(2α)||X||_F^2, where X is a matrix and ||X||_* and ||X||_F are the nuclear and Frobenius norms of X, respectively. We show that these augmented models can efficiently recover sparse vectors and low-rank matrices. In particular, they enjoy exact and stable recovery guarantees similar to those known for minimizing ||x||_1 and ||X||_*, under conditions on the sensing operator such as its null-space property, restricted isometry property, spherical section property, or RIPless property. To recover a (nearly) sparse vector x^0, minimizing ||x||_1 + 1/(2α)||x||_2^2 returns (nearly) the same solution as minimizing ||x||_1 almost whenever α ≥ 10||x^0||_∞. The same relation holds between minimizing ||X||_* + 1/(2α)||X||_F^2 and minimizing ||X||_* for recovering a (nearly) low-rank matrix X^0, provided α ≥ 10||X^0||_2. Furthermore, we show that the linearized Bregman algorithm for minimizing ||x||_1 + 1/(2α)||x||_2^2 subject to Ax = b enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties on A. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.

    Comment: arXiv admin note: text overlap with arXiv:1207.5326 by other authors
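
As a concrete illustration of the algorithmic part of the abstract, below is a minimal sketch of the linearized Bregman iteration for the augmented l1 model min ||x||_1 + 1/(2α)||x||_2^2 subject to Ax = b, written as a dual gradient step followed by a shrinkage. The function names (linearized_bregman, soft_threshold), the conservative step size 1/(α||A||_2^2), the iteration/stopping parameters, and the random demo data are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise shrinkage: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, n_iter=5000, tol=1e-10):
    """Sketch of linearized Bregman for:
        minimize ||x||_1 + 1/(2*alpha)*||x||_2^2  subject to  Ax = b.
    Each pass does one gradient step on the dual (tracked through v = A^T y)
    and then a shrinkage to update the primal variable x.
    The step size 1/(alpha*||A||_2^2) is a conservative, stable choice (assumed here).
    """
    m, n = A.shape
    v = np.zeros(n)                          # accumulated dual information, v = A^T y
    x = np.zeros(n)
    step = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)
    for _ in range(n_iter):
        v = v + step * (A.T @ (b - A @ x))   # dual gradient step
        x = alpha * soft_threshold(v, 1.0)   # primal update via shrinkage
        if np.linalg.norm(A @ x - b) <= tol * max(1.0, np.linalg.norm(b)):
            break
    return x

# Small demo: recover a sparse vector from random Gaussian measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0
x_rec = linearized_bregman(A, b, alpha=10 * np.abs(x0).max())
print("relative error:", np.linalg.norm(x_rec - x0) / np.linalg.norm(x0))
```

The demo sets α = 10||x^0||_∞, matching the threshold under which the abstract says the augmented model returns (nearly) the same solution as plain l1 minimization; with a looser tolerance or fewer iterations the output is only an approximation of that solution.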